travetto: Exec

The exec module provides the necessary foundation for calling executables at runtime. Special attention is also given to running Docker containers.

Simple Execution

Just like child_process, the module exposes spawn, fork, and exec. These are generally wrappers around the underlying functionality. In addition, each of those functions is converted to a Promise-based form that throws an error on a non-zero exit status.

A simple example:

import { spawn } from '@travetto/exec';

async function executeListing() {
  // spawn returns the child process handle along with a promise for its completion
  const [proc, resultPromise] = spawn('ls');
  await resultPromise;
}
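
Because the result promise rejects on a non-zero exit status, failures can be handled with an ordinary try/catch. Below is a minimal sketch, assuming arguments can be embedded in the command string (as in the pngquant check further down); the path is purely illustrative:

async function listMissingDir() {
  // 'ls' exits with a non-zero status for a missing path, so the promise rejects
  const [proc, resultPromise] = spawn('ls /no/such/dir');
  try {
    await resultPromise;
  } catch (err) {
    console.error('Listing failed', err);
  }
}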

As you can see, the call returns not only the child process handle, but also the Promise to wait on. Some common patterns are also provided for the default construction of the child process. In addition to the standard options for running child processes, the module supports the following (see the sketch after this list):

  • timeout: the number of milliseconds the process may run before it is terminated and an error is thrown
  • quiet: suppresses all stdout/stderr output
  • stdin: a string, buffer or stream used to provide input to the program being run
  • timeoutKill: registers functionality to execute when a process is force-killed by the timeout
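
A minimal sketch of how these options might be supplied, assuming they are accepted as an options object alongside the command (the exact call shape and the 'long-task' command are assumptions for illustration):

async function runWithOptions() {
  // NOTE: passing an options object as a second argument is an assumption
  const [proc, result] = spawn('long-task', {
    timeout: 5000,            // terminate after 5 seconds and throw
    quiet: true,              // suppress stdout/stderr
    stdin: 'input payload',   // string, buffer or stream fed to the child
    timeoutKill: async () => {
      // invoked when the process is force-killed by the timeout
      console.warn('long-task timed out and was terminated');
    }
  });
  await result;
}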

Inter-process Communication (IPC)

Node provides IPC (inter-process communication) functionality out of the box, and this module builds upon it by providing enhanced event management, as well as constructs for orchestrating multi-step processes.
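
For reference, the plain Node IPC that this module builds upon looks roughly like the following; the enhanced event management constructs themselves are not shown, and worker.js is a hypothetical script:

import { fork } from 'child_process';

// Fork a worker with an IPC channel; messages flow both ways over that channel
const child = fork('./worker.js');
child.on('message', msg => console.log('from worker:', msg));
child.send({ action: 'start' });
// worker.js would mirror this with process.on('message', ...) and process.send(...)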

Docker Support

Docker provides a unified way of executing external programs with a high level of consistency and simplicity. For that reason, the framework leverages this functionality to provide a clean cross-platform experience. The Docker support allows you to interact with containers in two ways:

  • Invoke a single operation against a container
  • Spin up a container and run multiple executions against it. In this mode, the container, once started, is scheduled to terminate when the application shuts down.

import { DockerContainer } from '@travetto/exec';

async function runMongo() {
  const port = 10000;
  const container = new DockerContainer('mongo:latest')
    .createTempVolume('/var/workspace')
    .exposePort(port)
    .setWorkingDir('/var/workspace')
    .forceDestroyOnShutdown();

  // Start mongod inside the container; intentionally not awaited, since it keeps running
  container.run('--storageEngine', 'ephemeralForTest', '--port', `${port}`);
  await DockerContainer.waitForPort(port);

  return container;
}

Command Service

While docker containers provide a high level of flexibility, performance can be an issue. CommandService is a construct that wraps execution of a specific child program. It allows the application to decide between using docker to invoke the child program or calling the binary directly against the host operating system. This is especially useful in environments where installing programs (and specific versions) is challenging.

import * as fs from 'fs';
import { CommandService, spawn } from '@travetto/exec';

const converter = new CommandService({
  image: 'agregad/pngquant',
  // Prefer a locally installed pngquant when it is available
  checkForLocal: async () => {
    return (await spawn('pngquant -h')[1]).valid;
  }
});

async function compress(img: string) {
  const [proc, prom] = await converter.exec('pngquant', '--quality', '40-80', '--speed', '1', '--force', '-');
  const out = `${img}.compressed`;

  // Stream the source image through pngquant and write the compressed output
  fs.createReadStream(img).pipe(proc.stdin);
  proc.stdout.pipe(fs.createWriteStream(out));

  await prom;
}
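
The compress helper above can then be called from any async context, for example (the file name is illustrative):

await compress('photo.png'); // writes photo.png.compressed next to the original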

Execution Pools

With respect to managing multiple executions, ExecutionPool is provided to allow for concurrent operation and to process jobs as quickly as possible.

To manage the flow of jobs, there are various DataExecutionSource implementations that cover a wide range of use cases.

The supported DataExecutionSources are:

  • Array: a list of jobs; executes in order until the list is exhausted.
  • Queue: similar to the list, but executes forever, waiting for new items to be added to the queue.
  • Iterator: a generator function that continues to produce jobs until the iterator is exhausted.

Below is a pool that will convert images on demand, while queuing as needed.

import { ChildProcess } from 'child_process';
import { ExecutionPool, QueueExecutionSource } from '@travetto/exec';

class ImageProcessor {
  active = false;
  proc: ChildProcess;

  kill() {
    this.proc.kill();
  }

  async convert(path: string) {
    this.active = true;
    try {
      // ... start the image conversion of `path` here and assign this.proc ...
      await this.proc;
    } catch (e) {
      // swallow conversion errors so this worker can be reused by the pool
    }
    this.active = false;
  }
}

class ImageCompressor {
  // NOTE: constructing the queue source directly is an assumption for illustration
  pendingImages = new QueueExecutionSource<string>();

  pool = new ExecutionPool(async () => {
    return new ImageProcessor();
  });

  constructor() {
    // Drain queued images through pooled ImageProcessor instances
    this.pool.process(this.pendingImages, async (inp, exe) => {
      await exe.convert(inp);
    });
  }

  convert(...images: string[]) {
    for (const img of images) {
      this.pendingImages.enqueue(img);
    }
  }
}
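
Putting it together, the compressor simply enqueues work and lets the pool drain it concurrently (the file names are illustrative):

const compressor = new ImageCompressor();
compressor.convert('a.png', 'b.png', 'c.png');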
